Laws of Robotics are a set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and are a topic of active research and development in the fields of robotics and artificial intelligence. The best-known set of laws are those written by Isaac Asimov in the 1940s, or those based upon them, but other sets of laws have been proposed by researchers in the decades since.

==Isaac Asimov's "Three Laws of Robotics"==
Isaac Asimov's "Three Laws of Robotics" were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
# A robot may not injure a human being or, through inaction, allow a human being to come to harm.
# A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
# A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In later books, a zeroth law was introduced:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Certain robots developed the Zeroth Law as the logical extension of the First Law: robots are often faced with ethical dilemmas in which any outcome will harm at least some humans, and harming fewer humans may be the only way to avoid harming more. The classic example involves a robot that sees a runaway train which will kill ten humans trapped on the tracks; its only choice is to switch the train onto another track where it will kill just one human. Even so, the robots who theorized the Zeroth Law developed it only as a hypothetical, almost philosophical abstraction.
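The precedence among the laws, with each law yielding to those above it, can be read as an ordered constraint check. The following is a toy sketch only; the boolean "harm model" (simple flags on an action) is entirely invented for illustration and is not part of Asimov's fiction:

```python
# Toy sketch: encode the law hierarchy as an ordered list of constraints,
# checked from highest priority down. A conflict between laws is resolved
# by whichever law is checked first. The harm model below (boolean flags
# on an action dict) is hypothetical, purely for illustration.

def evaluate(action, laws):
    """Check each law in priority order; return (allowed, violated_law)."""
    for name, law in laws:
        if not law(action):
            return (False, name)
    return (True, None)

laws = [
    # Zeroth Law: a robot may not harm humanity.
    ("Zeroth Law", lambda a: not a.get("harms_humanity", False)),
    # First Law: a robot may not injure a human being.
    ("First Law", lambda a: not a.get("harms_human", False)),
    # Second Law: a robot must obey orders (unless a higher law vetoes).
    ("Second Law", lambda a: a.get("obeys_order", True)),
    # Third Law: a robot must protect its own existence.
    ("Third Law", lambda a: not a.get("self_destructive", False)),
]

# An order to injure a human is vetoed by the First Law before the
# Second Law's obedience requirement is ever reached.
print(evaluate({"harms_human": True, "obeys_order": True}, laws))
# -> (False, 'First Law')
```

Because the list is checked top-down, obedience (Second Law) can never override the prohibition on injuring a human (First Law), which mirrors the "except where such orders would conflict" clauses in the laws themselves.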
Some robots use this as a license to try to conquer humanity for its own protection, but others are hesitant to implement the Zeroth Law because, in practice, they are not even certain what it means. Some robots are uncertain which course of action will prevent harm to the most humans in the long run, while others point out that "humanity" is so abstract a concept that they would not know whether they were harming it or not. A few even note that they are not certain what qualifies as "harm": whether the restriction prohibits only physical harm, or whether social harm is also forbidden. In the latter case, conquering humanity in order to impose tyrannical controls that prevent physical harm between humans (i.e. ending all human warfare) might nonetheless constitute a social harm to humanity as a whole.

Adaptations and extensions exist based upon this framework. As of 2011 they remain a "fictional device".